47 research outputs found

    Dynamic smoothness parameter for fast gradient methods

    We present and computationally evaluate a variant of Nesterov's fast gradient method that is capable of exploiting information, even if only approximate, about the optimal value of the problem. This information is available in some applications, among which is the computation of bounds for hard integer programs. We show that dynamically changing the smoothness parameter of the algorithm using this information results in a better convergence profile of the algorithm in practice.
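    As a rough illustration of the idea (not the algorithm from the paper), the sketch below runs an accelerated gradient method on a Huber smoothing of f(x) = |x| and re-ties the smoothness parameter mu to the estimated optimality gap at every iteration. The gap-based update rule and the adaptive restart are assumptions made for this toy example.

```python
import math

def huber_grad(x, mu):
    # Gradient of the Huber smoothing of |x| with parameter mu.
    return x / mu if abs(x) <= mu else math.copysign(1.0, x)

def fast_gradient_dynamic_mu(x0, f_opt_estimate, iters=200):
    # Accelerated gradient on the smoothed objective; mu is re-tied to the
    # estimated optimality gap at every iteration (hypothetical rule), and
    # the momentum is restarted whenever the objective increases.
    x, y, t = x0, x0, 1.0
    for _ in range(iters):
        gap = max(abs(x) - f_opt_estimate, 1e-9)
        mu = 0.5 * gap               # hypothetical gap-based update
        step = mu                    # step length 1/L, with L = 1/mu
        x_new = y - step * huber_grad(y, mu)
        if abs(x_new) > abs(x):      # adaptive restart on objective increase
            t, y = 1.0, x_new
        else:
            t_new = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)
            t = t_new
        x = x_new
    return x

x_final = fast_gradient_dynamic_mu(5.0, f_opt_estimate=0.0)
```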

    On the Computational Efficiency of Subgradient Methods: a Case Study with Lagrangian Bounds

    Subgradient methods (SM) have long been the preferred way to solve the large-scale Nondifferentiable Optimization problems arising from the solution of Lagrangian Duals (LD) of Integer Programs (IP). Although other methods can have better convergence rates in practice, SM have certain advantages that may make them competitive under the right conditions. Furthermore, SM have significantly progressed in recent years, and new versions have been proposed with better theoretical and practical performance in some applications. We computationally evaluate a large class of SM in order to assess whether these improvements carry over to the IP setting. For this we build a unified scheme that covers many of the SM proposed in the literature, including some often-overlooked features such as projection and dynamic generation of variables. We fine-tune the many algorithmic parameters of the resulting large class of SM, and we test them on two different Lagrangian duals of the Fixed-Charge Multicommodity Capacitated Network Design problem, in order to assess the impact of the characteristics of the problem on the optimal algorithmic choices. Our results show that, if extensive tuning is performed, SM can be competitive with more sophisticated approaches when the tolerance required of the solution is not too tight, which is the case when solving LDs of IPs.
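    One classical way optimal-value information enters a subgradient method is the Polyak step size; here is a minimal sketch on a toy one-dimensional nondifferentiable function (not one of the Lagrangian duals studied in the paper):

```python
def f(x):
    # Toy nondifferentiable objective: f(x) = |x - 3| + |x + 1|, minimum value 4.
    return abs(x - 3) + abs(x + 1)

def subgrad(x):
    # A subgradient of f at x (sum of signs of the two absolute-value terms).
    sign = lambda t: (t > 0) - (t < 0)
    return sign(x - 3) + sign(x + 1)

def polyak_subgradient(x, f_star, iters=100):
    # Polyak step: move along -g with length (f(x) - f*) / ||g||^2,
    # which requires knowing (or estimating) the optimal value f*.
    for _ in range(iters):
        g = subgrad(x)
        if g == 0:                  # a zero subgradient certifies optimality
            break
        x = x - (f(x) - f_star) / (g * g) * g
    return x

x = polyak_subgradient(10.0, f_star=4.0)
```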

    High quality timetables for Italian schools

    This work introduces a complex variant of the timetabling problem, motivated by the case of Italian schools. The new requirements are to (i) provide teachers with the same amount of idle time, (ii) avoid consecutive days with a heavy workload, (iii) limit multiple daily lessons for each class, and (iv) introduce shorter time units to differentiate entry and exit times. We present an integer programming model for this problem, which we denote the Italian High School Timetabling Problem (IHSTP). Requirements (i)-(iv), however, cannot be expressed in the current XHSTT standard. Since the IHSTP model is very hard to solve with an off-the-shelf solver, we present a two-step optimization method: the first step optimally assigns teachers to lesson times, and the second step assigns classes to teachers. Extensive experiments are performed on realistic and real instances from Italian schools, as well as on benchmark instances from the literature. The experiments show that the method is effective in solving both the new problem and the simplified problem without the new requirements.
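    At its core, the first step of such a two-step scheme is an assignment problem. A toy sketch with hypothetical penalty data, solved by exhaustive search (the paper uses an integer programming model, not this brute force):

```python
from itertools import permutations

# Hypothetical cost[t][s]: penalty if teacher t lectures in time slot s.
cost = [
    [1, 4, 5],
    [2, 0, 3],
    [6, 2, 1],
]

def assign(cost):
    # Step-1-style assignment sketch: each teacher gets exactly one slot,
    # minimizing the total penalty; exhaustive search is fine at toy sizes.
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[t][p[t]] for t in range(n)))
    return {t: best[t] for t in range(n)}

plan = assign(cost)
```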

    Polyhedral separation via difference of convex (DC) programming

    We consider polyhedral separation of sets as a possible tool in supervised classification. In particular, we focus on the optimization model introduced by Astorino and Gaudioso (J Optim Theory Appl 112(2):265–293, 2002) and adopt its reformulation in difference of convex (DC) form. We tackle the problem by adapting the DC programming algorithm known as DCA. We present the results of an implementation of DCA on a number of benchmark classification datasets.
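    The DCA iteration linearizes the concave part of the DC decomposition and solves the resulting convex subproblem. A minimal sketch on a toy one-dimensional DC function (not the polyhedral-separation model from the paper):

```python
def dca(x, iters=50):
    # Toy DC objective: f(x) = x**4 - 2*x**2, decomposed as g - h with
    # g(x) = x**4 (convex) and h(x) = 2*x**2 (convex).
    for _ in range(iters):
        y = 4.0 * x                  # subgradient of h at the current point
        # Convex subproblem: minimize g(x) - y*x, i.e. solve 4*x**3 = y.
        x = (y / 4.0) ** (1.0 / 3.0) if y >= 0 else -((-y / 4.0) ** (1.0 / 3.0))
    return x

# DCA converges to a critical point; from 0.5 it reaches the minimizer x = 1,
# where f(1) = -1.
x_star = dca(0.5)
```

Note that starting from the critical point x = 0 the iteration would stay there: like DCA in general, the sketch guarantees only criticality, not global optimality.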

    Differentiated thyroid carcinomas

    After an overview of neoplastic thyroid disease, the authors discuss the epidemiology, etiology, and diagnosis of well-differentiated thyroid carcinomas. They then report their case series and their many years of experience in treating patients affected by a disease generally considered to be of low malignancy. They hold that the treatment of choice today is total thyroidectomy with central-compartment lymphadenectomy, possibly combined with metabolic radiotherapy.

    Node-based Lagrangian relaxations for multicommodity capacitated fixed-charge network design

    Classical Lagrangian relaxations for the multicommodity capacitated fixed-charge network design problem are the so-called flow and knapsack relaxations, where the resulting Lagrangian subproblems decompose by commodities and by arcs, respectively. We introduce node-based Lagrangian relaxations, where the resulting Lagrangian subproblem decomposes by nodes. We show that the Lagrangian dual bounds of these relaxations improve upon the linear programming relaxation bound, which is known to be equal to the Lagrangian dual bounds for the flow and knapsack relaxations. We also develop a Lagrangian matheuristic to compute upper bounds. The computational results on a set of benchmark instances show that the Lagrangian matheuristic is competitive with the state-of-the-art heuristics from the literature.
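    The mechanics of a Lagrangian dual bound can be sketched on a toy 0-1 knapsack, where dualizing the capacity constraint makes the relaxed problem decompose per item, analogous to the per-commodity and per-arc decompositions above (the data below are made up):

```python
def lagrangian_bound(c, w, W, lam):
    # Lagrangian relaxation of max sum(c_i x_i) s.t. sum(w_i x_i) <= W,
    # x binary: dualize capacity with multiplier lam >= 0. The relaxed
    # problem separates per item: pick x_i = 1 iff c_i - lam*w_i > 0.
    return sum(max(ci - lam * wi, 0.0) for ci, wi in zip(c, w)) + lam * W

def best_dual_bound(c, w, W, grid=1000):
    # Crude search over multipliers; a subgradient method on lam would
    # do this far more efficiently on real instances.
    return min(lagrangian_bound(c, w, W, k / 100.0) for k in range(grid))

c, w, W = [10.0, 7.0, 4.0], [5.0, 4.0, 3.0], 7.0
ub = best_dual_bound(c, w, W)   # dual bound 13.5; the integer optimum is 11
```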

    Generalized Bundle Methods for Sum-Functions with "Easy" Components: Applications to Multicommodity Network Design

    We propose a version of the (generalized) bundle scheme for convex nondifferentiable optimization suitable for the case of a sum-function where some of the components are "easy", that is, they are Lagrangian functions of explicitly known compact convex programs. This corresponds to a stabilized partial Dantzig-Wolfe decomposition, where suitably modified representations of the "easy" convex subproblems are inserted in the master problem as an alternative to iteratively inner-approximating them by extreme points, thus providing the algorithm with exact information about a part of the dual objective function. The resulting master problems are potentially larger and less well-structured than the standard ones, ruling out the available specialized techniques and requiring the use of general-purpose solvers for their solution; this strongly favors piecewise-linear stabilizing terms, as opposed to the more usual quadratic ones. This in turn may have an adverse effect on the convergence speed of the algorithm, so that the overall performance may depend on appropriate tuning of all these aspects. Yet, very good computational results are obtained in at least one relevant application: the computation of tight lower bounds for Fixed-Charge Multicommodity Min-Cost Flow problems.
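    The inner-approximation idea underlying such bundle schemes can be sketched with an unstabilized cutting-plane (Kelley) method in one dimension; the stabilizing term and the "easy component" machinery of the paper are omitted from this toy version:

```python
def kelley(f, grad, lo, hi, iters=30):
    # Build a piecewise-linear model of f from (value, subgradient) pairs
    # and minimize it exactly over [lo, hi]. In 1D the master problem is
    # solved by checking the endpoints and all pairwise cut intersections.
    cuts = []                        # (slope, intercept): cut(x) = a*x + b
    x = lo
    for _ in range(iters):
        a = grad(x)
        cuts.append((a, f(x) - a * x))
        cands = [lo, hi]
        for i in range(len(cuts)):
            for j in range(i + 1, len(cuts)):
                a1, b1 = cuts[i]
                a2, b2 = cuts[j]
                if a1 != a2:
                    xi = (b2 - b1) / (a1 - a2)
                    if lo <= xi <= hi:
                        cands.append(xi)
        model = lambda t: max(a * t + b for a, b in cuts)
        x = min(cands, key=model)    # next iterate: minimizer of the model
    return x

# Minimize f(x) = x^2 over [-2, 3]; the model minimizers converge to 0.
x_star = kelley(lambda t: t * t, lambda t: 2 * t, -2.0, 3.0)
```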

    Nonsmooth optimization: Theory and algorithms

    This is a summary of the author's PhD thesis, supervised by Manlio Gaudioso and Maria Flavia Monaco and defended on 21 February 2008 at the Università della Calabria. The thesis is a survey of nonsmooth optimization methods for both convex and nonconvex functions. The main contribution of the dissertation is the presentation of a new bundle-type method. The thesis is written in English and is available from http://www2.deis.unical.it/logilab/gorgone. © 2009 Springer-Verlag